    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.

    Heterogeneous data-processing optimization with CLARA’s adaptive workflow orchestrator

    The hardware landscape used in HEP and NP is changing from homogeneous multi-core systems towards heterogeneous systems with many different computing units, each with its own characteristics. To achieve maximum data-processing performance, the main challenge is to place the right computation on the right hardware. In this paper, we discuss CLAS12 charged-particle tracking workflow orchestration that allows us to utilize both CPUs and GPUs to improve performance. The tracking algorithm was decomposed into micro-services that are deployed on CPU and GPU processing units, where the best features of both are intelligently combined to achieve maximum performance. In this heterogeneous environment, CLARA aims to match the requirements of each micro-service to the strengths of a CPU or a GPU architecture. A predefined execution of a micro-service on a CPU or a GPU may not be the optimal solution because of the streaming data-quantum size and the data-quantum transfer latency between CPU and GPU. The CLARA workflow orchestrator is therefore designed to dynamically assign micro-service execution to a CPU or a GPU, based on online benchmark results analyzed over a period of real-time data processing.
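
    To make the dynamic-assignment idea concrete, the sketch below shows one way an orchestrator could time a micro-service on each device and route the next data quantum to whichever shows the lower recent mean latency. This is a minimal C++ illustration of the scheme described in the abstract, not CLARA code: the Device, RollingBenchmark, and pick_device names are invented here.

        #include <chrono>
        #include <cstddef>
        #include <deque>
        #include <numeric>

        // Hypothetical illustration of dynamic CPU/GPU assignment;
        // not part of the CLARA API.
        enum class Device { CPU, GPU };

        // Rolling window of observed per-data-quantum processing times for
        // one device, including CPU<->GPU transfer latency.
        class RollingBenchmark {
        public:
            explicit RollingBenchmark(std::size_t window) : window_(window) {}

            void record(std::chrono::microseconds elapsed) {
                samples_.push_back(elapsed.count());
                if (samples_.size() > window_) samples_.pop_front();
            }

            double mean_us() const {
                if (samples_.empty()) return 0.0;
                return std::accumulate(samples_.begin(), samples_.end(), 0.0) /
                       static_cast<double>(samples_.size());
            }

        private:
            std::size_t window_;
            std::deque<long long> samples_;
        };

        // Route the next data quantum to the device with the lower recent
        // mean latency; stay on the CPU until both devices have been measured.
        Device pick_device(const RollingBenchmark& cpu, const RollingBenchmark& gpu) {
            if (cpu.mean_us() == 0.0 || gpu.mean_us() == 0.0) return Device::CPU;
            return (gpu.mean_us() < cpu.mean_us()) ? Device::GPU : Device::CPU;
        }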

    Streaming Readout of the CLAS12 Forward Tagger Using TriDAS and JANA2

    An effort is underway to develop a streaming readout data acquisition system for the CLAS12 detector in Jefferson Lab’s experimental Hall-B. Successful beam tests were performed in the spring and summer of 2020 using a 10 GeV electron beam from Jefferson Lab’s CEBAF accelerator. The prototype system combined elements of the TriDAS and CODA data acquisition systems with the JANA2 analysis/reconstruction framework, successfully merging components that included an FPGA stream source, a distributed hit-processing system, and software plugins that allowed offline analysis code written in C++ to be used for online event filtering. Details of the system design and performance are presented.
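
    As a rough illustration of how offline-style C++ code can act as an online filter, the sketch below follows JANA2’s JEventProcessor and InitPlugin conventions (exact signatures may differ across JANA2 versions); the CalorimeterHit type, the energy threshold, and the counting logic are invented for this example and are not part of the actual Hall-B system.

        #include <atomic>
        #include <iostream>
        #include <JANA/JApplication.h>
        #include <JANA/JEvent.h>
        #include <JANA/JEventProcessor.h>

        struct CalorimeterHit { double energy_MeV; };  // hypothetical hit type

        // Event-filter processor: Process() may be called concurrently by the
        // framework, so shared state must be thread-safe.
        class FilterProcessor : public JEventProcessor {
        public:
            void Process(const std::shared_ptr<const JEvent>& event) override {
                auto hits = event->Get<CalorimeterHit>();  // reconstructed hits
                double total_MeV = 0.0;
                for (const auto* hit : hits) total_MeV += hit->energy_MeV;
                if (total_MeV > 100.0) ++kept_;  // invented selection threshold
            }
            void Finish() override {
                std::cout << "events kept: " << kept_ << std::endl;
            }
        private:
            std::atomic<long> kept_{0};
        };

        // Plugin entry point, so the same code can be loaded online or offline.
        extern "C" void InitPlugin(JApplication* app) {
            InitJANAPlugin(app);
            app->Add(new FilterProcessor());
        }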

    Streaming readout for next generation electron scattering experiments

    Current and future experiments at the high-intensity frontier are expected to produce an enormous amount of data that needs to be collected and stored for offline analysis. Thanks to continuous progress in computing and networking technology, it is now possible to replace the standard ‘triggered’ data acquisition systems with a new, simplified, and higher-performing scheme. ‘Streaming readout’ (SRO) DAQ aims to replace the hardware-based trigger with a much more powerful and flexible software-based one that considers the whole detector information for efficient real-time data tagging and selection. Considering the crucial role of the DAQ in an experiment, validation with field tests is required to demonstrate SRO performance. In this paper, we report results of the on-beam validation of the Jefferson Lab SRO framework. We exposed different detectors (PbWO-based electromagnetic calorimeters and a plastic scintillator hodoscope) to the Hall-D electron-positron secondary beam and to the Hall-B production electron beam, under increasingly complex experimental conditions. By comparing the data collected with the SRO system against data from the traditional DAQ, we demonstrate that the SRO performs as expected. Furthermore, we provide evidence of its superiority in implementing sophisticated AI-supported algorithms for real-time data analysis and reconstruction.
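
    The contrast between a hardware trigger and an SRO software trigger can be sketched in a few lines: because the whole detector information is available in software, a selection can correlate subsystems that a fixed hardware trigger could not. The snippet below is a schematic, self-contained C++ example; the TimeSlice layout, thresholds, and coincidence window are invented for illustration.

        #include <cmath>
        #include <vector>

        // Hypothetical readout structures: all hits from the whole detector
        // for one time slice are visible to the software trigger at once.
        struct CaloHit   { double t_ns; double e_MeV; };
        struct HodoHit   { double t_ns; };
        struct TimeSlice { std::vector<CaloHit> calo; std::vector<HodoHit> hodo; };

        // Software trigger: keep the slice if a calorimeter deposit above
        // threshold is in time coincidence with a hodoscope hit. A hardware
        // trigger would typically see only one subsystem's prompt signals.
        bool keep_slice(const TimeSlice& s,
                        double e_threshold_MeV = 50.0,  // invented threshold
                        double window_ns = 20.0) {      // invented window
            for (const auto& c : s.calo) {
                if (c.e_MeV < e_threshold_MeV) continue;
                for (const auto& h : s.hodo) {
                    if (std::abs(c.t_ns - h.t_ns) < window_ns) return true;
                }
            }
            return false;
        }

    In a real SRO chain such a predicate would run inside the stream-processing framework and, as the paper discusses, could be replaced by an AI-supported classifier.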
